Patent Abstract:
A method and apparatus (10) are disclosed for generating an exogenous natural speech signal (10s) as an auditory stimulus to enhance the fluency of a person who stutters. The natural speech signal 10s is independent of the user's concurrent speech and may be provided as a voice gesture, such as a simple vowel, an extended or sustained voice gesture sound such as a consonant, or a vowel string. The secondary speech signal 10s can be transmitted intermittently or continuously, prior to and/or concurrent with a stuttering situation or speech production. The device 10 of the present invention is configured to provide the speech-based signal 10s so that it can be heard by the user without requiring feedback of the user's own speech, enabling the user to speak at a substantially normal rate with enhanced fluency. The apparatus 10 and methods may transmit the signal 10s manually or automatically based on detection of a speaking or stuttering situation on the user's side.
Publication number: KR20040016441A
Application number: KR10-2003-7003887
Filing date: 2000-12-18
Publication date: 2004-02-21
Inventors: Joseph Kalinowski; Andrew Stuart; Michael Rastatter
Applicant: East Carolina University
IPC main class:
Patent Description:

Methods and devices for delivering exogenously generated speech signals to enhance fluency in persons who stutter
[2] Stuttering has typically been treated by a variety of therapies, including psychiatric treatment, drug treatment, and altered auditory feedback generated by an electrical signal generator and delivered to the stutterer. These techniques can generally be characterized either as endogenous changes in the speech signal output, such as prolonged or slowed speech, rhythmic speech, singing, and lipped speech, or as exogenous dynamic changes in the speech signal itself. Both can successfully elicit comparatively fluent speech in people who stutter. See O. Bloodstein, A Handbook on Stuttering (5th ed., Singular, San Diego, CA, 1995).
[3] In general, exogenous auditory alterations of speech, such as chorus reading, shadow speech, delayed auditory feedback, and frequency-altered feedback, or visual therapy modalities such as visual choral speech, induce more powerful and natural-sounding fluent speech in people who stutter than do incongruous nonverbal auditory inputs, such as noise and noise masking, or visual inputs, such as flashing lights.
[4] Two types of modified auditory feedback that have been used to treat stuttering are delayed auditory feedback (DAF) and noise-blocked or masked auditory feedback (MAF). Generally stated, DAF imposes a delay in the transmission of the feedback speech signal to the speaker, while MAF functions to compete with the speaker's auditory feedback.
[5] For example, M. E. Wingate, Stuttering: theory and treatment, p. 237 (Irvington, 1976), describes one type of altered auditory feedback that may include DAF, which emphasizes speech by slowing it and extending the duration of syllables so that each syllable is clearly pronounced. However, it is believed that this type of auditory-feedback fluency enhancement can be achieved regardless of the use of DAF, as long as syllable prolongation is used. See, e.g., W. H. Perkins, From Psychoanalysis to Discoordination, in H. H. Gregory (Ed.), Controversies about stuttering therapy, pp. 97-127 (University Park Press, 1979); Andrew Stuart et al., Fluent Speech, Fast Articulatory Rate, and Delayed Auditory Feedback: Creating a Crisis for a Scientific Revolution, 82 Perceptual and Motor Skills, pp. 211-218 (1996).
[6] Generally stated, the reduction in stuttering frequency under speech signal modification has been attributed to modified rhythm, distraction, modified vocalization, and rate reduction. Indeed, in the past, slowed speech rate was held to be an important factor in reducing stuttering. For example, in W. H. Perkins et al., Phone rate and the effective planning time hypothesis of stuttering, 29 Journal of Speech and Hearing Research, 747-755 (1979), the authors reported that stuttering is virtually eliminated when the speaker reduces the speech rate by about 75%. However, other reports have found that rate reduction is neither necessary nor sufficient to increase fluency. See Kalinowski et al., Stuttering amelioration at various auditory feedback delays and speech rates, European Journal of Disorders of Communication, 31, 259-269 (1996); Stuart et al., Fluent speech, fast articulatory rate, and delayed auditory feedback: Creating a crisis for a scientific revolution, Perceptual and Motor Skills, 82, 211-218 (1996); MacLeod et al., Effect of single and combined altered auditory feedback on stuttering frequency at two speech rates, Journal of Communication Disorders, 28, 217-228 (1995); Kalinowski et al., Effect of normal and fast articulatory rates on stuttering frequency, Journal of Fluency Disorders, 20, 293-302 (1995); Hargrave et al., Effect of frequency-altered feedback on stuttering frequency at normal and fast speech rates, Journal of Speech and Hearing Research, 37, 1313-1319 (1994); and Kalinowski et al., Effects of alterations in auditory feedback and speech rate on stuttering frequency, Language and Speech, 36, 1-16 (1993).
[7] More recently, US Pat. No. 5,961,443 to Rastatter et al. discloses portable treatment devices and related fluency-enhancing treatment methods for stuttering, the contents of which are hereby incorporated by reference in their entirety. These devices and methods convey modified auditory feedback (a temporally delayed and/or frequency-shifted signal) to the stutterer through a handheld device. Notwithstanding the above, there remains a need for improved methods and devices for treating stuttering in an effective and easily administered manner.
[1] The present invention relates to an apparatus and a method for enhancing the fluency of a person who stutters.
[71] FIG. 1 schematically illustrates one embodiment of the apparatus according to the present invention, configured to transmit an exogenously generated natural speech signal to the user as an auditory stimulus.
[72] FIG. 2 is a block diagram of the steps of one method according to the present invention for enhancing the fluency of a person who stutters.
[73] FIG. 3 is a schematic representation of another embodiment of an apparatus according to the invention.
[74] FIG. 4 schematically illustrates another embodiment according to the present invention.
[75] FIG. 5A is a side perspective view of a behind-the-ear (BTE) device in accordance with one embodiment of the present invention.
[76] FIG. 5B is a side perspective view of an in-the-ear (ITE) device according to one embodiment of the present invention.
[77] FIG. 6 is a schematic representation of another embodiment of the device according to the invention.
[78] FIGS. 7A-7G illustrate exemplary implementations of devices capable of transmitting an exogenous secondary speech signal in accordance with the present invention.
[79] FIG. 8 is a graph of first experimental results according to the present invention, showing mean stuttering frequency as a function of auditory feedback.
[80] FIG. 9 is a graph of second experimental results according to the present invention, showing mean stuttering frequency as a function of auditory feedback.
[8] These and other objects are achieved in accordance with the present invention by methods and apparatus using a "secondary" exogenous speech signal generated by a sound or sounds corresponding to oral vocal utterance or natural speech (independent of the speaker's concurrently uttered speech). The secondary exogenous speech signal may be generated by means other than a human voice (such as electrically, mechanically, or electromechanically) so as to simulate sounds of natural speech; the simulated sound(s) take the form of voice gestures that stimulate the speaker's auditory cortex. The secondary speech signal of the present invention can be used as an alternative to DAF or MAF, which typically manipulate, modify, interfere with, or compete with the speaker's own concurrent speech. The secondary speech signal of the present invention is an auditory stimulus that is an oral speech signal (i.e., a voice gesture associated with the human vocal tract). The secondary speech signal may be stuttered or fluent speech, and/or meaningful (a series of meaningful sounds that form words) or meaningless (sound(s) with no understandable or meaningful content).
[9] Preferably, the secondary speech signal comprises an extended or sustained voiced sound associated with a natural voice gesture, such as a single-syllable vowel or consonant, or a combination of vowels and/or consonants. The secondary speech signal of the present invention may be delivered, intermittently or for a sustained period, so as to be temporally proximate to the speech production of the user/patient being treated for stuttering.
[10] Preferably, the secondary or exogenous auditory speech signal of the present invention is generated exogenously by someone other than the patient/stutterer (or, as noted above, by a device capable of substantially replicating vocal-tract output so as to stimulate the speaker's auditory cortex). It is also desirable that the secondary speech signal be pre-recorded and stored prior to use, so that it can be simply and reliably provided, and audibly delivered, to the speaker when desired (and repeated when appropriate).
[11] In one embodiment, the exogenous secondary speech signal is an extended voiced sound (such as the last sound of the word "sudden"). The extended voiced sound is more preferably a steady-state single-syllable sound. It is furthermore desirable for the extended voiced sound to be the vocal-tract output associated with producing a steady-state vowel sound. The exogenous speech signal can be provided at regular intervals during speech, or even during fluent speech, such as when a person prone to stuttering starts speaking and/or during a stuttering situation, so as to prevent the stuttering situation from occurring.
[12] The secondary speech signal may be provided as an array of different voice gesture sounds, the output of which may be varied to alter the exogenously generated auditory stimulus provided to the patient over time.
[13] In a preferred embodiment, the secondary or exogenous speech signal is pre-recorded and delivered to the user at a desired or appropriate time (activated by user input, or automatically upon stutter detection). The volume and/or duty cycle of the output is preferably variable so that the user can adjust it as needed. That is, in one embodiment, the user can select from a continuum ranging from outputting the signal continuously while speaking (or during a desired output window) to outputting the signal intermittently at a desired adjustable interval over the output period, thereby increasing or decreasing the duration or frequency of the transmitted secondary speech signal.
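The continuum from continuous to intermittent output described above can be sketched in software. The following Python fragment is purely illustrative; the function name, repetition period, and duty-cycle parameters are hypothetical conveniences, not part of the disclosed device. It computes the on-windows for the secondary signal within a speaking window, with a duty cycle of 1.0 collapsing to continuous output.

```python
def output_windows(window_s: float, duty_cycle: float, period_s: float = 10.0):
    """Return (start, stop) times in seconds during which the secondary
    speech signal is emitted within a speaking window of window_s seconds.
    duty_cycle=1.0 yields one continuous window; lower values emit the
    signal intermittently at the start of each repetition period."""
    if duty_cycle >= 1.0:
        return [(0.0, window_s)]
    windows = []
    t = 0.0
    while t < window_s:
        # Emit for the "on" fraction of each period, clipped to the window.
        on_end = min(t + duty_cycle * period_s, window_s)
        windows.append((t, on_end))
        t += period_s
    return windows
```

For example, a 30-second speaking window at a 50% duty cycle with a 10-second period yields three 5-second bursts.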
[14] The secondary speech signal may be embedded in and delivered from a portable device such as an in-the-ear (ITE), behind-the-ear (BTE), or over-the-ear (OTE) stuttering aid. The secondary speech signal auditory stimulus may also be generated from a stand-alone hand-sized device with a speaker (or from audio media such as a compact disc or tape, or downloadable computer code or other computer-readable program format), or from a communication device (such as the handset or base of a telephone or cordless telephone, a two-way headset, and the like), or from other devices such as writing implements (with a speaker or microphone input). In other embodiments, the secondary speech signal may be embedded in an audio chip or DSP integrated into an article worn on the body, such as a (wrist)watch, bracelet, lapel pin, or jewelry such as necklaces and earrings, or into other articles held or worn within hearing distance of the user, such as headbands, hats, and the like.
[15] One aspect of the present invention relates to a method for enhancing the fluency of a person who stutters, comprising the steps of: (a) generating a speech signal exogenously (independent of the patient's concurrent speech production); (b) producing speech by a patient prone to stuttering; and (c) delivering the exogenous speech signal to the patient during the producing step so that the speech signal is audible to the patient.
[16] In a preferred embodiment, the exogenously generated speech signal is stored or pre-recorded, reproduced repeatedly, and audibly delivered to the patient at desired intervals at appropriate times. In addition, the exogenous or secondary speech signal is preferably generated by a person other than the patient.
[17] Another aspect of the invention relates to an apparatus for enhancing the fluency of a person who stutters. The apparatus includes an audio storage medium holding at least one pre-recorded auditory-stimulus speech signal, and a speaker operatively associated with the audio storage medium to output the speech signal. The apparatus also includes a power source connected to the audio storage medium and the speaker, and an operation switch operably associated with the power source. The device may be further configured to repeatedly output the auditory stimulus or secondary speech signal to the user at desired times corresponding to at least one of: temporally proximate to a stuttering situation; prior to a speaking situation (speech production on the user's side); and during a speaking situation, thereby providing an auditory stimulus that improves the fluency of the user/stutterer.
[18] In a preferred embodiment, the apparatus comprises a user-input trigger switch operatively associated with the speaker. The user-input trigger switch is adapted to accept user input and initiate substantially immediate delivery of the auditory stimulus (the secondary speech signal) so that it can be heard by the user. The apparatus may also include an intermittent output switch or button allowing the user to set the length or repetition cycle of the transmitted output signal (thereby letting the user change the auditory stimulus). Similarly, the device may include a selectable-signal button that allows the user to choose which signal is transmitted, or to automatically vary the output signal over a desired time period.
[19] In one embodiment, the device may further comprise a microphone and a signal processor configured to receive a signal generated by the user's speech. In this embodiment, the device automatically outputs the auditory-stimulus speech signal to the user based on analysis of the received signal associated with the user's speech, such that the auditory-stimulus speech signal can be provided substantially simultaneously with the user's speech, independent of (and without) auditory feedback of the user's own concurrent speech. Preferably, the auditory-stimulus speech signal is transmitted in a manner that allows the user to speak at a substantially normal speech rate.
[20] The apparatus may also be arranged to monitor the signals received by the microphone and signal processor so as to identify the onset and termination of speech production by the user. The device may output the auditory-stimulus speech signal substantially continuously or intermittently while the user speaks (e.g., concurrently with or during the user's speech).
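One simple way such onset/offset monitoring could work, offered only as a hedged sketch since the patent does not specify a detection algorithm, is frame-by-frame energy thresholding of the digitized microphone signal: an onset is declared when frame energy first exceeds a threshold, and a termination when it falls back below it. The threshold value and frame representation here are illustrative assumptions.

```python
def detect_boundaries(frames, threshold=0.01):
    """Scan digitized audio frames (lists of samples in [-1, 1]) and
    return (onset_index, offset_index) pairs for each detected run of
    speech activity, using mean-square energy thresholding."""
    events, onset = [], None
    for i, frame in enumerate(frames):
        energy = sum(x * x for x in frame) / len(frame)
        active = energy > threshold
        if active and onset is None:
            onset = i          # speech production begins
        elif not active and onset is not None:
            events.append((onset, i))  # speech production ends
            onset = None
    if onset is not None:
        events.append((onset, len(frames)))  # still speaking at end
    return events
```

Real devices would add smoothing or hangover logic so brief pauses between words are not reported as terminations.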
[21] In one embodiment, the apparatus may also include a detector operably associated with the processor and a receiver (microphone). The detector detects the occurrence of an actual stuttering situation and is activated upon an impending stuttering situation or the onset of an actual stuttering situation, so that the device can output the auditory-stimulus speech signal to the user.
[22] As noted above, the auditory-stimulus speech signal may include a plurality of different extended natural speech sounds associated with voice gestures that are independent of the user's concurrent speech and are output serially to the user.
[23] Preferably, the exogenously or secondarily generated speech signal is vocal, uttered, or verbal speech sound(s) that is incongruous with the speech production of the stutterer/user. Accordingly, the present invention provides an auditory stimulus that can serve as an effective auditory mechanism for enhancing the fluency of the person who stutters, while enabling the user to speak at a substantially normal rate without requiring the use of DAF or MAF. The secondary stimulus speech signal may be meaningful or meaningless, and may be provided as incongruous sentences or oral speech in a normal or stuttered form, or as steady-state oral speech signals with extended or sustained voice gesture sounds.
[24] The above and other objects and aspects of the present invention are described in more detail in the detailed description below.
[25] The invention is now described in more detail with reference to the accompanying drawings showing preferred embodiments of the invention. However, the invention may be embodied in many other forms and should not be construed as limited to the embodiments set forth herein. Like numbers refer to like elements throughout. In the drawings, layers, zones, or components may be exaggerated for clarity.
[26] As shown in FIG. 1, the device 10 is capable of supplying an exogenously generated auditory (secondary) speech signal 10s to the speaker. As shown, the device 10 is preferably adapted to transmit the speech signal 10s to the user temporally proximate to a speaking situation or, more preferably, substantially concurrently with it (while the patient or user is speaking). As used herein, the term "exogenous" means generated by a source external to the user, preferably by someone other than the patient/user, or, if generated by the user, pre-recorded before use. The auditory stimulus of the present invention does not require in-situ manipulation of, or feedback from, the user's concurrent speech, and may be incongruous with the content of the user's speech.
[27] The exogenous speech signals of the present invention may be considered "secondary" speech signals, where the primary speech signal is the speaker's actual concurrent speech. The present invention, unlike conventional stuttering devices and treatment methods, uses a secondary exogenous speech signal as the auditory stimulus. That is, the secondary speech signal is a natural or spoken speech signal (a voice gesture associated with the vocal tract) that is not generated concurrently by the speaker and is not associated with the speaker's own concurrent speech. The secondary speech signal likewise does not disturb (or delay, block, or otherwise feed back) the user's actual concurrent spoken speech. Accordingly, the secondary speech signal of the present invention is independent of, and separate from, the user's concurrent speech, and is provided as an auditory stimulus that enables the user to speak with enhanced fluency at a substantially normal rate. The secondary natural speech signal may be meaningful or meaningless (i.e., it may have meaning the user can understand or no meaning at all, and it may be a single voice gesture or a collection of voice gestures). In one embodiment, the secondary speech signal is provided to the patient/user in the same language as the user's primary language. Alternatively, the secondary speech signal may be generated in a language different from the user's primary language.
[28] FIG. 2 illustrates a method of enhancing the fluency of a person who stutters, according to one embodiment of the invention. The method includes the steps of: (a) generating a secondary speech signal exogenously (block 100); and (b) delivering the exogenously generated speech signal to the patient (during and/or temporally proximate to the patient's speech production) so that the secondary speech signal can be heard by the patient (block 120).
[29] In one embodiment, the method also optionally includes recording or storing the voice of a person other than the patient (block 130) to provide the exogenously generated secondary speech signal. The recording or storage of the secondary speech signal is made such that, at a suitable time or desired time frame, the secondary speech signal can be reproduced and repeatedly transmitted to the patient or user. In this way, the patient has a reliable speech aid that can assist fluency whenever needed.
[30] The secondary or exogenous speech signal may be stuttered or fluent. It may include oral sounds such as extended voice gestures, extended single vowels or consonants, or single or combined vowels and/or consonants, as further described below. Moreover, the exogenous or secondary speech signal of the present invention may be provided to the patient in an intermittent manner (e.g., at a 25-75% duty cycle, or combinations thereof) while the patient or user speaks (i.e., while speech is being produced on the patient/user's side). Alternatively, the secondary speech signal may be provided so that it lasts for a certain time, or it may be provided to the user substantially continuously during speech production. Preferably, the secondary signal is delivered to the user upon activation of the device, upon speech production by the user/patient, or temporally proximate to or during a stuttering situation of the user/patient. The secondary speech signal 10s may be provided substantially continuously or intermittently while the speaker/user is speaking, and may also be provided in advance of (temporally proximate to) speaking.
[31] As noted above, the secondary or exogenous auditory speech signal is preferably generated by someone other than the user who stutters. The secondary speech signal may also be generated by a device, such as an elongated tube, capable of substantially replicating the voice or vocal-tract output associated with human voice gesture sounds, so that, in operation, the replicated speech signal can stimulate the auditory cortex of the stutterer/user. Of course, the person who stutters may also record a suitable (predetermined and incongruous) extended secondary speech signal(s) prior to use and later play it back as the secondary speech signal. However, it may be more economical to "burn" or record a set of standardized secondary speech signals to suit a diverse audience. Thus, it is also desirable for the speech-based signal of the present invention to be generated and stored (recorded, "burned", and/or stored) prior to use, and reproduced or output conveniently and reliably at a desired time frame.
[32] In addition, it is preferred that the exogenously generated secondary speech signal of the present invention include an extended oral voice gesture (emphasizing a selected oral sound). More preferably, the secondary speech signal is at least one orally extended syllabic sound (such as the last sound of the word "sudden"), or a sonorant or continuant. As used herein, the term "prolonged" means emphasizing or sustaining the sound of a voice gesture beyond a normal speech pattern, preferably holding the voice gesture in a substantially steady state for at least about 2 to 30 seconds. It is even more preferred that the secondary speech signal comprise a simple sustained or steady-state vowel spoken in any suitable language. For example, in the case of English, these include the simple sustained /a/, /i/, /e/, /o/, /u/, and /y/.
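For illustration only, a steady-state vowel-like stimulus can be approximated in software as a fundamental plus a few fixed-amplitude harmonics. Real sustained vowels have formant structure this crude sketch omits, and every parameter value here (fundamental frequency, amplitudes, sample rate) is a hypothetical choice, not taken from the patent.

```python
import math

def sustained_vowel(duration_s=5.0, f0=120.0, rate=16000):
    """Generate a steady-state vowel-like tone: a fundamental at f0 Hz
    plus two harmonics with fixed amplitudes (a crude stand-in for the
    spectral shape of a sustained /a/). Returns a list of samples."""
    n = int(duration_s * rate)
    return [
        0.6 * math.sin(2 * math.pi * f0 * t / rate)
        + 0.3 * math.sin(2 * math.pi * 2 * f0 * t / rate)
        + 0.1 * math.sin(2 * math.pi * 3 * f0 * t / rate)
        for t in range(n)
    ]
```

Because the amplitudes sum to 1.0, the output stays within [-1, 1] and can be fed directly to a playback stage.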
[33] In another embodiment, the exogenous speech signal comprises a string of vowels, such as a three-vowel train. For example, in English, the three-vowel sequence /a-i-u/ represents the three vertices of the vowel triangle; other vowel sequences, or vowel sounds uttered continuously, may also be used. Similarly, the secondary speech signal may include consonant strings, or continuously spoken (preferably extended or sustained) consonants and/or vowels or combinations thereof, or sonorant or continuant sounds.
[34] Preferably, the secondary speech signal is transmitted to the user or stutterer with a duration of at least about 5 seconds to 2 minutes. More preferably, the secondary speech signal has a duration of at least about 5 to 10 seconds and may be transmitted intermittently throughout the user's speech, for example every 10 to 30 seconds, or every 1-2 minutes, during ongoing speech production (the intervals may be constant or may grow closer or farther apart over time), as needed or desired. It should also be noted that the secondary speech signal may be recorded as a single short signal (e.g., about 1 to 5 seconds) and looped to provide a longer output signal. For example, an exogenous speech signal 1 second in duration may be looped 10 times (e.g., by electronic or analog means) to output a 10-second signal to the user.
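The looping idea can be sketched as follows; this helper is a hypothetical software analogue of the electronic looping described above, not part of the patent's circuitry. A short recorded clip is repeated end-to-end and trimmed to the target duration.

```python
def loop_signal(samples, target_s, rate=16000):
    """Repeat a short recorded clip end-to-end, then trim, so that e.g.
    a 1-second secondary speech signal plays back for 10 seconds."""
    n_target = int(target_s * rate)
    repeats = -(-n_target // len(samples))  # ceiling division
    return (samples * repeats)[:n_target]
```

A hardware implementation might instead add a short crossfade at each loop boundary to avoid audible clicks; this sketch does the simple concatenation the text describes.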
[35] The output or transmission of the secondary speech signal may be timed or controlled by a variable and/or adjustable timer integrated into the device, which determines the transmission time of the secondary signal (e.g., measured from activation of the device or from the first transmitted secondary speech signal). However, as noted above, the secondary speech signal may be provided, consistent with the user's needs, substantially continuously throughout the speech production of the user or patient (typically substantially overlapping the duration of the user's own speech production), or intermittently (or as needed or desired during or near speech production). As such, the exogenous speech signal of the present invention may be provided just prior to or at the beginning of speech production, and/or during the speech production of a speaker who tends to stutter, or when a stuttering situation begins or is being experienced (any of these may be effected in a variety of ways, such as through user input or operation buttons on the device). The apparatus may also allow selection of a desired duration or output transmission cycle by means of a selectable duty cycle or timing function input (not shown).
[36] In one embodiment, the secondary speech signal may be provided as an arrangement of different oral or voice gesture sounds so as to vary the exogenous auditory speech stimulus delivered to the user over time. For example, an enhanced fluency treatment may deliver to the user a first exogenous speech signal comprising a sustained steady-state /a/ voice gesture sound (preferably prior to the onset of speech production or a first stuttering situation), followed by a second, different exogenous speech signal comprising a sustained /e/ (preferably at a subsequent stuttering situation, or at a second speaking situation separated in time from the first), then the (repeated) first exogenous signal, or a third, different exogenous signal such as a sustained, substantially steady-state vowel, vowel string, or consonant, and so on.
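Cycling through an arrangement of gesture sounds as just described can be expressed as a simple rotation over the stored stimuli. The gesture labels below are placeholders for recorded clips; the function is an illustrative sketch, not the patent's mechanism.

```python
import itertools

def stimulus_rotation(gestures):
    """Yield pre-recorded voice-gesture stimuli in rotation, so the
    auditory stimulus presented to the user varies over successive
    speaking or stuttering situations."""
    return itertools.cycle(gestures)
```

Each detected speaking or stuttering situation would then pull the next stimulus from the rotation.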
[37] The methods and apparatus of the present invention may also provide the exogenously generated secondary speech signal with a mix of selectable natural speech signals, some of which may be specific to particular stuttering disorders or may provide improved results for particular users. For example, exogenous or secondary speech signals can be recorded on compact discs (or tapes) with multiple sound tracks, each track providing a different secondary speech signal (a different spoken sound or voice gesture). Alternatively, a configurable storage medium such as an audio chip or DSP unit can be used to provide a selectable or exchangeable secondary speech signal, and thus a selectable or exchangeable auditory stimulus.
[38] Referring again to FIG. 1, the present invention includes an apparatus 10 adapted to provide, transmit, or deliver a pre-recorded or stored secondary speech signal to a patient during operation. The secondary speech signal 10s is preferably generated exogenously, by a person other than the user. As shown in FIG. 1, the device 10 preferably comprises at least one speaker 25, a power source 27, and a speech or audio signal storage medium 20. Preferably, as shown in FIG. 1, the device 10 includes a user-accessible on/off actuation switch 28 that allows the power source 27 (such as a battery) to be disconnected when not in use, thus preserving battery life (where the device is not wired to an electrical outlet). The speech signal storage medium 20 is operatively associated with the speaker 25 and the power source 27 so that the device 10 can output the secondary speech signal in operation. Optionally, the device 10 may be operated by a remote control unit 33', and various parameters of the speech signal 10s output (e.g., its volume, signal duration or length, and signal sound type) may be controlled by the remote control unit 33'.
[39] The speech signal 10s may, by way of non-limiting example, be captured and stored by any number of suitable speech signal storage media 20, including digital signal processors such as a DSP chip, audio cards, sound chips, general-purpose computers, compact discs, tapes, computer program products (including those downloadable from an Internet site), or processor circuitry including other sound recording or audio storage media.
[40] FIG. 3 shows another embodiment of the present invention. As shown, the apparatus 10' includes a processor 30 operatively associated with a speaker 25. The processor 30 may be an analog or digital signal processor, preferably a microprocessor such as a DSP. The processor 30 provides the speech signal 10s to the speaker 25 so that the speech signal can be heard by the user. As shown, the apparatus 10' may also include a start/stop trigger switch 33 that allows the user to effect substantially immediate output (or termination) of the speech signal 10s. As also shown, the device 10' may further include a volume control 23 and/or a variable signal output regulator 29 so that the user can adjust the output of the signal 10s to the user's needs. That is, as indicated by the dotted-line connection to the regulator 29, in one embodiment the user can increase or decrease the duration or frequency of the transmitted secondary speech signal 10s along a continuum ranging from outputting the signal continuously while speaking or during a desired output time window t1, to outputting the signal intermittently at a desired adjustable interval during the output time window t1.
[41] FIG. 4 illustrates another embodiment of the present invention. In this embodiment, the device 10" is adapted to monitor at least a portion of the user's speech so as to identify the start or end of the user's speech (and thus the duration of the speaking situation). The device can use this information to automatically deliver the speech signal 10s concurrently with the user's speech, without the user having to manually operate the device 10". Alternatively, the device 10" may include a detection circuit 50 to detect the onset or occurrence of a stuttering situation and transmit the speech signal 10s in response to the detected or impending stuttering situation. Of course, the device 10" may additionally use a user trigger that can be manually activated. Preferably, the device 10" is an OTE, BTE, or ITE device (as shown in FIGS. 5A and 5B). Details and descriptions of conventional components of suitable small portable devices are set forth in US Pat. No. 5,961,443 to Rastatter et al.
[42] As shown in FIG. 4, the device 10" includes a receiver 70, such as a microphone or a transducer, adapted to receive sound waves associated with the user's speech production during operation. The receiver 70 produces an analog input signal corresponding to the user's speech. Preferably, as shown in FIG. 4, the analog input signal is converted into a stream of digital input signals for subsequent analysis. In one embodiment, the device 10" includes a low pass filter 72 to prevent aliasing. The low pass filter 72 is located after the receiver 70 and before the A/D converter 76. The cutoff frequency of the low pass filter 72 is preferably sufficient to reproduce a recognizable speech sample after digitization; a typical cutoff frequency for voice is about 8 kHz. In addition, filtering out higher frequencies may also remove unwanted background noise.
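As a rough software analogue of the smoothing performed by the low pass filter 72 (which in the device precedes digitization and would ordinarily be a higher-order analog design), a single-pole low-pass recursion using the 8 kHz cutoff mentioned above might look like the sketch below. The implementation is illustrative only and is not the filter disclosed in the specification.

```python
# Illustrative first-order low-pass filter: passes low frequencies,
# attenuates content near the sampling limit. A real anti-aliasing
# filter (72) would be analog and of higher order.
import math

def lowpass(samples, fs, cutoff=8000.0):
    """Single-pole IIR low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)      # exponential smoothing toward the input
        out.append(y)
    return out
```

A constant (low-frequency) input passes through essentially unchanged, while a signal alternating at half the sampling rate is noticeably attenuated, which is the qualitative behavior the anti-aliasing stage relies on.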
[43] The output of the low pass filter 72 can be input to a sample-and-hold circuit 74. As is known in the art, the sampling rate should exceed twice the cutoff frequency of the low pass filter 72 to reduce the likelihood of sampling error. The sampled signals output by the sample-and-hold circuit 74 are input to the A/D converter 76. The resulting digital signal stream, representing a sampling of data sufficient to allow the device 10" to determine whether the user has initiated or terminated speech production, is then input to the controller 30', which analyzes the digital stream to determine whether speech production is beginning, ending, or continuing.
[44] As shown, the controller 30' is in communication with a power source 27 and the speaker 25. In this embodiment, the device 10" also includes a speech signal chip 82 that stores the recorded auditory secondary speech signal 10s. Of course, the controller 30' may itself be a DSP or other signal processor in which the audio speech signal can be stored; that is, the speech signal chip 82 need not be a separate element and is shown separately merely for ease of illustration. The apparatus 10" may also include an adjustable gain amplifier so that its output can be adjusted to a desired listening level.
[45] In operation, the controller 30' analyzes the digital stream associated with the input signal from the receiver 70 to determine whether the user has started speaking (typically indicated by an analog or digital voice signal rising above a predetermined threshold level). If so, the controller 30' automatically supplies power to the speaker 25 and outputs the speech signal 10s through the speaker 25. The controller 30' may continue to monitor samples of the digital stream to determine whether speech is ongoing, and therefore whether the speech signal should continue to be output. As noted above, the speech signal may be output substantially continuously while the user speaks or otherwise in association with speech. Once the controller 30' determines that speech has ended, the speech signal 10s is also automatically terminated.
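A minimal sketch of the threshold logic attributed to the controller 30' — treating speech as active while short-frame energy stays above a predetermined threshold, and keeping the output keyed on briefly through short pauses — is given below. The threshold and hangover values are assumptions chosen for illustration, not values from the specification.

```python
# Minimal onset/offset sketch: per-frame energy compared against a
# threshold, with a short "hangover" so brief pauses do not toggle the
# secondary signal off. Threshold and hangover values are illustrative.

def speech_active(frames, threshold=0.01, hangover=3):
    """Return one True/False flag per frame: True while speech is active."""
    quiet, states = hangover, []
    for frame in frames:
        energy = sum(s * s for s in frame) / len(frame)
        if energy >= threshold:
            quiet = 0                      # speech detected this frame
        else:
            quiet += 1                     # count consecutive quiet frames
        states.append(quiet < hangover)    # stay on through short pauses
    return states
```

In use, the secondary signal 10s would be gated on whenever the flag is True and terminated once a run of quiet frames exceeds the hangover, mirroring the automatic start/stop behavior described above.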
[46] As also shown in FIG. 4, the device 10" may include an activation/deactivation circuit 60, which can stop transmission from the receiver 70 (such as a microphone) to the earphone or speaker 25. One embodiment of such a circuit is disclosed in US Pat. No. 4,464,119 to Vildgrube et al., the content of which is incorporated herein by reference in its entirety. Thus, when speech production falls below a predetermined threshold, the device 10" can be switched off manually and/or automatically, either by switching off the power or by entering a "standby" mode.
[47] In one embodiment, the device 10" may include a stutter detection circuit 50. The detection circuit 50 is operatively associated with the controller 30' and with the digital data stream corresponding to the user's speech. In operation, the detection circuit 50 can identify irregular speech production patterns and cause the controller 30' to immediately transmit the speech signal 10s to the user to enhance fluency. If a secondary speech signal is already being transmitted to the user, the device 10" may instead increase the volume of the signal, as described above, or change the speech signal transmitted to the user to a different secondary speech signal. Typical irregular speech patterns can be identified by prolongations of sound (corresponding to part-word or word prolongations), repetitions of sound (corresponding to part-word or word repetitions), and the like. Although shown as a circuit separate from the controller 30', the detection circuit 50 may also be integrated into the controller 30' itself (as hardware, software, or a combination thereof). Suitable means for identifying stuttering situations are described in the following references: Howell et al., Development of a two-stage procedure for the automatic recognition of dysfluencies in the speech of children who stutter: II. ANN recognition of repetitions and prolongations with supplied word segment markers, Journal of Speech, Language, & Hearing Research, 40(5): 1085-96 (Oct. 1997); Howell et al., Development of a two-stage procedure for the automatic recognition of dysfluencies in the speech of children who stutter: I. Psychometric procedures appropriate for selection of training material for lexical dysfluency classifiers, Journal of Speech, Language, & Hearing Research, 40(5): 1073-84 (Oct. 1997); Howell et al., Automatic recognition of repetitions and prolongations in stuttered speech, C.W. Starkweather and H.F.M. Peters (Eds.), Proceedings of the First World Congress on Fluency Disorders, Vol. II (pp. 372-374), Nijmegen, The Netherlands: University Press Nijmegen (1995); and Howell et al., Automatic stuttering frequency counts, W. Hulstijn, H. Peters and P. Van Lieshout (Eds.), Speech Production: Motor Control, Brain Research and Fluency Disorders, Amsterdam: Elsevier Science, 395-404 (1997). The contents of these references are incorporated herein by reference in their entirety.
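By way of illustration only, one crude software approximation of what the detection circuit 50 might test for is sketched below: a prolongation detected as an abnormally stable stretch of non-silent frame energy. The thresholds and window size are hypothetical, and practical recognizers (such as the two-stage ANN procedures in the Howell references above) are considerably more sophisticated.

```python
# Hypothetical prolongation check for a detection circuit like (50):
# flag a stretch of speech whose frame energy stays nearly constant
# (and non-silent) for an unusually long run. Thresholds are illustrative.

def frame_energies(frames):
    return [sum(s * s for s in f) / len(f) for f in frames]

def looks_prolonged(frames, min_frames=8, tol=0.1):
    """True if some non-silent run of min_frames has near-constant energy."""
    e = frame_energies(frames)
    for i in range(len(e) - min_frames + 1):
        win = e[i:i + min_frames]
        mean = sum(win) / len(win)
        # non-silent and within tol*mean of the window mean everywhere
        if mean > 0.01 and max(abs(v - mean) for v in win) < tol * mean:
            return True
    return False
```

A repetition detector would be analogous but would look for a recurring short pattern rather than a flat one; either result could then trigger the controller 30' to transmit, boost, or swap the secondary signal as described above.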
[48] FIG. 6 illustrates an embodiment of the detection circuit 50 that uses a voice comparator 80, which compares the user's speech patterns to identify irregular patterns associated with the onset, occurrence, or termination of a stuttering situation. The voice comparator 80 is adapted to compare a fluent or normal voice signal with an irregular or stuttered voice signal to confirm the presence of a stuttering situation.
[49] As described above, the secondary speech signal may be embedded in and provided by a portable device such as an ITE (in-the-ear), BTE (behind-the-ear), or OTE (over-the-ear) stuttering aid device, as shown in FIGS. 5A and 5B. The device may be formed as a monaural or binaural input device (located within, or in close proximity to, one or both of the user's ears).
[50] Alternatively, the auditory speech-based stimulus of the present invention can be provided in a number of ways. In certain embodiments, the auditory stimulus may be generated by a stand-alone handheld or wearable device, or provided on a compact disc (FIG. 7C) or audiotape, or as downloadable computer program code (e.g., transmitted over a worldwide computer network system), or as another computer-readable program format. The former types can be played by conventional tape and CD players, while the latter can be played by a general-purpose laptop (FIG. 7G) or by a miniaturized handheld, palm, or wearable computer.
[51] Recently, consumer electronics companies have proposed devices that can be worn as a jacket (representing a "body area network"). Such a device may include a headset that allows the user to listen to telephone calls and music through the same headphones, switching between the two modes with a remote control. This technology is suitable for incorporating the secondary speech signals of the present invention into similar devices, so that the secondary speech signal is output in place of, or in addition to, the music and telephone-call listening the device already permits. Thus, during operation, the secondary speech signal may be transmitted to and output from the same headset through which the user listens to telephone calls, under control of the remote control unit. See, for example, New Wired Clothing Comes With Personal Network, cnn.com/2000/TECH/computing/08/18/wired.jacket.idg/index.html (posted August 18, 2000). The contents of this document are incorporated herein by reference in their entirety.
[52] Alternatively, the secondary speech signal auditory stimulus of the present invention may be incorporated into conventional consumer products. For example, it is anticipated that the audio natural speech signal stimulus of the present invention may be incorporated into communication devices with voice or microphone inputs (e.g., the handset or base of a telephone or the body of a cordless telephone), or into other audio-prompter devices that can be easily accessed and used where the user typically expects to speak at various times during operation. FIG. 7A indicates that the secondary speech signal 10s may be transmitted from the base 204 or handset 202 of a telephone 200. FIG. 7B shows that the signal 10s can be transmitted from the body 210 of a wireless telephone.
[53] In other embodiments, the secondary speech signal 10s may be embedded in and provided by a watch 220 (FIG. 7F), a bracelet, a lapel or shirt pin, a necklace 230 (FIG. 7E), or another wearable article such as jewelry, a headband, an eyeglass frame, or a hat, positioned so that the user or patient can hear it. FIG. 7D shows a headphone device adapted to provide binaural transmission of the secondary speech signal 10s, shown as being output from earphones 240. FIG. 7C shows a compact disc or other audio storage medium 240, while FIG. 7G shows a computer 250 with an audio output. In any case, the exogenous auditory stimulus associated with the present invention may provide an efficient auditory mechanism for enhancing the fluency of a person who stutters.
[54] Certain embodiments of the devices 10, 10', 10" of the present invention may use an external battery pack, while others may use internal battery power. Of course, extension cords, direct power cords, and trickle chargers may also be used. An example of a known BTE hearing aid with a DSP, external battery pack, and processing pack is the PHOENIX, produced by the NICOLET Company (Madison, Wisconsin).
[55] As will be appreciated by one of ordinary skill in the art, the present invention may be embodied as a method, apparatus, or computer-executable program. Accordingly, the present invention may take the form of a hardware embodiment or an embodiment combining software and hardware aspects.
[56] The invention is also described using flowchart illustrations and block diagrams. It will be understood that each block of the flowchart illustrations and block diagrams, and combinations of blocks, may be implemented by computer program instructions. These program instructions may be provided to processor circuit(s) in a mobile user terminal or system, such that the instructions executed on the processor circuit(s) create means for performing the functions specified in the block or blocks. The computer program instructions may also be executed by the processor circuit(s) to cause a series of operational steps to be performed, producing a computer-implemented process such that the instructions executed on the processor circuit(s) provide steps for implementing the functions specified in the block or blocks.
[57] Accordingly, the blocks support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block, and combinations of blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or steps, or by combinations of special-purpose hardware and computer instructions.
[58] Example
[59] Exogenous stuttered and normal speech signals were generated and their efficacy was compared. To determine whether stuttering reduction would be achieved, and which components of the incongruent secondary speech signal produce that reduction (or enhance fluency), incongruent speech signals were used to compare the inherently discontinuous nature of exogenous stuttered speech with that of incongruent fluent speech. (In incongruent speech, the secondary speech signal contains phonemic material different from the material read aloud by the participants.) In addition, the natural classification of vowels and consonants was examined at both dynamic and relatively static vocal positions. Experiment I used meaningful speech: normal continuous speech, normal interrupted speech, stuttered continuous speech, and stuttered interrupted speech. Experiment II used vowels and consonants: /a/, /a-i-u/, /s/, /s-sh-f/.
[60] Ten normal-hearing adults who stutter (8 males, 2 females; mean age 27.9 years, SD 9.4) participated in both experiments. Participants had no other speech- or language-related disorders. All participants had prior treatment experience but were not receiving regular treatment at the time of the study. In both experiments, participants read different 300-syllable, junior-high-school-level passages with similar themes and syntactic complexity. For both experiments, experimental conditions and passages were randomized but balanced. Participants read at a normal rate throughout the experiments and did not use any techniques to reduce or stop stuttering. In both experiments, participants received auditory feedback through supra-aural earphones at a comfortable listening level.
[61] The first experiment required participants to listen to incongruent fluent or stuttered speech samples, presented continuously or intermittently (50% duty cycle). Both speech samples were incongruent recorded text. The stuttered speech samples included distinct stuttering behaviors on all words.
[62] In the second experiment, participants listened to four continuous speech signals: a steady-state neutral vowel, /a/; a string of three vowels representing the three vertices of the vowel triangle, /a-i-u/; a steady-state consonant, /s/; and a string of three consonants, /s-sh-f/. The consonants were chosen so that they could be produced in the absence of vowels. The steady vowels and consonants and their respective strings were used to represent different degrees of proximity to natural speech behavior. Participants also read control passages with non-altered auditory feedback (NAF). Stuttering episodes were counted from videotape recordings of the participants' readings. Stuttering was defined as part-word repetitions, part-word prolongations, and/or inaudible postural fixations.
[63] The stimuli for these samples were recorded with a digital tape recorder (SONY model 8819) in a sound booth. For both experiments, a fluent adult male speaker of American English produced the vowel, consonant, and fluent speech samples. An adult male speaker of American English who stutters produced the stuttered speech samples for the first experiment. Both produced the samples with normal vocal effort. The fluent speech samples used text from junior-high-school textbook passages with themes and syntactic complexity similar to those read by the participants.
[64] The recorded signals were then input to a personal computer (Apple Power Macintosh 9600/300) through the APPLE sound input port. Sampling was performed at 44 kHz. Sound analysis software (SOUND EDIT version 2) was used to insert silence, select the various stuttering moments, and loop the signals. Silent intervals varied randomly from 2 to 5 seconds. The signals were then recorded on a compact disc and delivered through a compact disc player (SONY model CFD-S28). The signals were presented binaurally through headphones (OPTIMUS model PRO.50MX) at a level the participant could hear comfortably. All participants spoke into lapel microphones (RADIOSHACK model 33-3003) placed about 15 cm or less from the mouth, at approximately 0 degrees azimuth and -120 degrees elevation. The microphone output was input to a video camera (SONY model CCD-TVR 75).
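The stimulus-preparation step described above — looping a recorded sample with silent intervals varied randomly between 2 and 5 seconds at the 44 kHz sampling rate — can be sketched as follows. The function and its defaults are illustrative, not a reproduction of the SOUND EDIT procedure.

```python
# Illustrative reconstruction of the looping step: repeat a recorded
# sample with a randomly chosen 2-5 s silent gap after each repetition.
import random

def looped_stimulus(sample, loops, fs=44000, seed=0):
    """Return sample values looped `loops` times with random silent gaps."""
    rng = random.Random(seed)          # seeded for reproducibility
    out = []
    for _ in range(loops):
        out.extend(sample)
        gap = rng.uniform(2.0, 5.0)    # silent interval, in seconds
        out.extend([0.0] * int(gap * fs))
    return out
```

The resulting buffer could then be written to an audio medium, paralleling how the looped signals here were recorded to compact disc for playback.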
[65] The mean stuttering frequency and standard error as a function of auditory feedback condition for Experiment I are shown in FIG. 8, with error bars representing plus one standard error of the mean. In the figure, "NAF" denotes non-altered auditory feedback, "FI" the interrupted fluent condition, "SI" the interrupted stuttered condition, "SC" the continuous stuttered condition, and "FC" the continuous fluent condition. As shown, a significant main effect of auditory feedback on stuttering frequency was observed (p = 0.0004). Single-df comparisons showed a significant decrease in stuttering for all types of altered auditory feedback compared to NAF (p < 0.0001). No statistically significant difference was observed between fluent and stuttered speech feedback (p = 0.76), or between continuous and interrupted speech feedback (p = 0.10).
[66] The mean and standard error of stuttering frequency (i.e., number of stuttering episodes per 300 syllables) as a function of auditory feedback for Experiment II are shown in FIG. 9; error bars represent plus one standard error of the mean. In FIG. 9, "NAF" again denotes non-altered auditory feedback. A significant main effect on stuttering frequency was observed (p = 0.0006). Post hoc single-df comparisons showed a significant decrease in stuttering frequency for all forms of altered auditory feedback compared to NAF (p < 0.0001). In addition, stuttering episodes were significantly fewer when the auditory feedback was vowels rather than consonants (p < 0.0001). No significant difference in stuttering frequency was observed between the single sounds and the sound strings (p < 0.40).
[67] These experiments provide empirical data showing that exogenous stuttered or fluent incongruent speech signals can induce or enhance fluency in people who stutter. Indeed, the results show that stuttering frequency can be reduced regardless of whether the exogenous signal is based on stuttered or normal speech. Moreover, the use of exogenously generated speech signals comprising vowels may provide improved efficiency in promoting fluency in people who stutter.
[68] As discussed above, stuttering appears to be a natural compensation mechanism for "involuntary blocks" at the central level, rather than the primary problem itself. In other words, people stutter in an attempt to acoustically mitigate "involuntary blocks" at the central level during speech performance. The overt expression of stuttering is an attempt to compensate at the peripheral level, even through conspicuous compensation, for loss of control at the central level. Thus, stuttering is assumed to be a form of compensation rather than a problem in itself, much as fever plays a compensatory role in infectious disease states. The absence of adequate fluency-promoting gestures, manifested or expressed as a lack of inhibition of the auditory cortex while carrying out proper planning for the smooth performance of speech behavior, is assumed to be a prominent etiological factor. Recent brain imaging studies have used choral speech conditions to induce fluent speech in adults who stutter and compared the brain images obtained with those obtained during stuttering situations/behaviors. See, for example, Fox et al., A PET study of the neural systems of stuttering, 382 Nature, pp. 158-161 (1996); Wu et al., A positron emission tomography [18F]deoxyglucose study of developmental stuttering, 6 Neuroreport, pp. 501-505 (1995). A lack of activation in the auditory area was observed during motor planning of stuttered speech, but essential normalization was observed under choral speech conditions, indicating fluency-enhancing potential.
[69] The foregoing is illustrative of the invention and should not be construed as limiting it. Although exemplary embodiments of the present invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the claims.
[70] In the claims, where means-plus-function clauses are used, they are intended to cover the structures described herein as performing the recited functions, as well as structural equivalents and equivalent structures. Therefore, the foregoing is illustrative of the invention and is not to be construed as limited to the specific embodiments disclosed; modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the following claims, with equivalents of the claims to be included therein.
Claims (61)
1. A method of enhancing the fluency of a patient who stutters, comprising:
generating a secondary speech signal exogenously;
producing speech, thereby defining a primary speech signal corresponding to the speech of a patient prone to stuttering during speech production; and
enhancing the fluency of the patient by delivering the exogenously generated secondary speech signal to the patient in temporal proximity to the producing step, so that the secondary speech signal can be heard by the patient.
2. The method of claim 1, wherein the secondary speech signal is incongruent with the content of the primary speech signal provided in the producing step.
3. The method of claim 1, wherein the delivering step is performed in temporal proximity to, and in advance of, the producing step.
4. The method of claim 2, wherein the patient speaks at a substantially normal speech rate during the producing step.
5. The method of claim 1, wherein the exogenously generated secondary speech signal comprises a prolonged voice gesture sound.
6. The method of claim 5, wherein the prolonged voice gesture sound lasts for at least 5 seconds.
7. The method of claim 1, wherein the delivering step is performed intermittently during the producing step.
8. The method of claim 1, wherein the exogenously generated secondary speech signal comprises a substantially steady-state vowel sound that lasts for at least 5 seconds.
9. The method of claim 1, wherein the exogenously generated secondary speech signal comprises a substantially steady-state single consonant sound having a duration of at least 5 seconds.
10. The method of claim 1, wherein the exogenously generated speech signal comprises a plurality of prolonged sounds spoken by a person other than the patient, the plurality including at least one of (a) a continuous vowel sound, (b) a continuous consonant sound, and (c) a single continuous vowel sound.
11. The method of claim 10, wherein the exogenously generated speech signal comprises a plurality of the prolonged voice sounds transmitted serially to the patient.
12. The method of claim 1, further comprising detecting a stuttering situation.
13. The method of claim 12, wherein the delivering step is performed in response to the detecting step.
14. The method of claim 1, wherein the delivering step is performed in response to user input initiating the delivering step.
15. The method of claim 2, wherein the delivering step is performed substantially continuously during the producing step.
16. The method of claim 1, further comprising storing the exogenously generated secondary signal on an audio medium, wherein the exogenously generated secondary speech signal is provided by the oral voice of a person other than the patient.
17. The method of claim 16, wherein the delivering step is performed by transmitting the stored secondary signal.
18. The method of claim 17, wherein the delivering step is repeated a plurality of times during the producing step.
19. The method of claim 16, wherein the exogenously generated speech signal comprises a plurality of different spoken sounds, each having a duration of at least about 10 seconds.
20. The method of claim 1, wherein the delivering step is performed such that the secondary signal is transmitted from a source located proximate to at least one ear of the patient.
21. The method of claim 1, wherein the delivering step is performed such that the secondary speech signal is transmitted from a location away from the patient's ear while the patient speaks, traveling through the air into the patient's ear.
22. The method of claim 19, wherein the exogenously generated speech signal is adjustable by the patient, such that the patient can select the desired signal duration and volume during the delivering step.
23. The method of claim 16, wherein the delivering step comprises transmitting the generated secondary speech signal from a handset of a communication device.
24. The method of claim 1, wherein the exogenously generated secondary speech signal comprises a plurality of spoken speech signals, and wherein the delivering step includes changing the content of the speech signal transmitted to the patient over time.
25. The method of claim 1, wherein the exogenously generated secondary speech signal comprises a stuttered speech signal that is incongruent with the content of the speech produced by the patient during the producing step.
26. The method of claim 1, wherein the exogenous secondary speech signal comprises a fluent spoken speech signal that is incongruent with the content of the speech produced by the patient during the producing step.
27. The method of claim 16, wherein the delivering step is performed by any one of an OTE, BTE, or ITE device.
28. The method of claim 16, wherein the storing step is performed by recording the exogenously generated secondary speech signal on a compact disc.
29. The method of claim 1, wherein the secondary speech signal is incongruent, mismatched with the speech output during the producing step, and wherein the delivering step is repeated during the producing step.
30. The method of claim 16, wherein the delivering step is performed by any one of a handheld device, a general-purpose computer, a wireless communication device, or a telephone.
31. The method of claim 16, wherein the delivering step is performed by a device worn as any one of a belt clip, a watch, a hat, a lapel pin, or a jacket, positioned so as to be in acoustic communication with the patient when in operation.
[32" claim-type="Currently amended] An audio storage medium comprising at least one predetermined auditory stimulus exogenous verbal language signal;
A speaker operably associated with the audio storage medium;
A power source in communication with the audio storage medium and the speaker; And
An actuating switch operably associated with the power source, wherein the auditory stimulus speech signal corresponds to at least one of during speech stuttering on the user side, prior to speech production by the user, and during speech production of the user Apparatus for providing an auditory stimulus for the user to improve the speech fluency of the user stuttering by repeatedly outputting to the user at a desired time zone.
33. The apparatus of claim 32, further comprising a user-input trigger switch operably associated with the speaker, the user-input trigger switch accepting user input to initiate substantially immediate delivery of the auditory-stimulus secondary speech signal so that the user can hear it.
34. The apparatus of claim 32, further comprising a microphone and a signal processor for receiving and analyzing speech signals generated by the user's speech.
35. The apparatus of claim 34, wherein the apparatus automatically outputs the auditory-stimulus speech signal from the speaker to the user based on analysis of the user's speech, such that the auditory-stimulus speech signal is delivered substantially simultaneously with the user's speech, is incongruent with the content of the user's speech, and is delivered in a manner that allows the user to speak at a substantially normal speech rate.
36. The apparatus of claim 35, wherein, in operation, the apparatus can identify the onset and end of speech production by the user by monitoring signals received by the microphone and signal processor, and can output the auditory-stimulus speech signal intermittently while the user speaks.
37. The apparatus of claim 32, wherein the apparatus provides the auditory-stimulus speech signal concurrently with the user's speech, independent of and mismatched with the user's concurrent speech, in a manner that enables the user to speak at a substantially normal speech rate.
38. The apparatus of claim 37, wherein the auditory-stimulus speech signal is provided substantially continuously while the user speaks.
39. The apparatus of claim 37, wherein the auditory-stimulus speech signal is provided intermittently while the user speaks.
40. The apparatus of claim 32, further comprising a detector operably associated with the processor and the microphone, the detector detecting the occurrence of an actual stuttering situation, wherein, when an imminent or actual stuttering situation is recognized from the user, the apparatus is activated to provide the auditory-stimulus speech signal.
41. The apparatus of claim 32, wherein the auditory-stimulus secondary speech signal comprises a plurality of different spoken speech signals, each comprising a different prolonged voice gesture sound, the plurality of different secondary signals being output serially to the user at a desired time.
[42" claim-type="Currently amended] 42. The apparatus of claim 41, wherein the plurality of voice gesture sounds are temporarily separated over time and output to the user.
[43" claim-type="Currently amended] 33. The device of claim 32, wherein the device comprises an actuating switch that enables the device to provide the auditory stimulus speech signal in close proximity to the patient prior to speaking.
[44" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the auditory stimulus speech signal comprises at least one spoken extended speech gesture sound.
[45" claim-type="Currently amended] 45. The apparatus of claim 44, wherein each of the at least one extended voice gesture sounds lasts for at least 5 seconds at a substantially constant, audible level.
[46" claim-type="Currently amended] 45. The apparatus of claim 44, wherein the auditory stimulus speech signal comprises a steady state vowel sound that lasts for at least 5 seconds in a substantially audible range.
[47" claim-type="Currently amended] 45. The apparatus of claim 44, wherein the auditory stimulus speech signal comprises a steady state consonant sound that lasts for at least 5 seconds in a substantially audible range.
[48" claim-type="Currently amended] 45. The apparatus of claim 44, wherein the auditory stimulus speech signal comprises a plurality of extended spoken voice gesture sounds including at least one of (a) a continuous vowel sound, (b) a continuous single consonant sound, and (c) a continuous single vowel sound.
[49" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the auditory stimulus speech signal comprises a plurality of different voice gesture sounds, each having an audible duration of at least 10 seconds.
[50" claim-type="Currently amended] 33. The device of claim 32, wherein the device is portable and has a size and shape such that the speaker can be located near the user's ear when in use, so that the speech signal is delivered to at least one of the user's ears.
[51" claim-type="Currently amended] 33. The device of claim 32, wherein, in operation, the speaker is positioned away from the user but within hearing range, such that the speech signal output from the speaker travels about 3 inches or more through the air before entering the user's ear while the user speaks.
[52" claim-type="Currently amended] 33. The device of claim 32, wherein the device is integrated into the body of the phone.
[53" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the auditory stimulus speech signal comprises a stuttered speech signal.
[54" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the auditory stimulus speech signal comprises a normal fluent speech signal.
[55" claim-type="Currently amended] 33. The device of claim 32, wherein the device is formed as any one of an OTE (over-the-ear), BTE (behind-the-ear), and ITE (in-the-ear) device.
[56" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the audio storage medium is a compact disc.
[57" claim-type="Currently amended] 33. The apparatus of claim 32, wherein the audio storage medium comprises a DSP.
[58" claim-type="Currently amended] 33. The device of claim 32, wherein the device is integrated into any one of a portable handheld device, a writing instrument, a general purpose computer, a wireless communication device, and a telephone.
[59" claim-type="Currently amended] 33. The device of claim 32, wherein the device is worn as any one of a belt clip, watch, hat, lapel, jacket, eyeglass frame, and pin.
[60" claim-type="Currently amended] 33. The apparatus of claim 32, further comprising a remote control unit for activating the output of the auditory language signal.
[61" claim-type="Currently amended] An audio storage medium storing an exogenously generated speech signal, the signal comprising at least one extended spoken voice gesture sound produced by a person other than the user of the product, wherein, in operation, the speech signal is delivered to the user as an auditory stimulus to enhance the fluency of a person who stutters.
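Claims 36 and 40 describe a device that monitors a microphone to recognize the onset and end of the user's speech and gates the output of a stored stimulus accordingly. The following is a minimal illustrative sketch of that gating idea using a simple energy threshold; it is not the patented implementation, and the class name, threshold value, and frame handling are assumptions made for illustration only.

```python
# Illustrative sketch only (not the patented implementation): an
# energy-threshold speech detector that gates playback of a stored
# stimulus, in the spirit of claims 36 and 40.

def frame_energy(frame):
    """Mean squared amplitude of one audio frame (list of floats)."""
    return sum(s * s for s in frame) / len(frame)

class StimulusGate:
    """Monitors audio frames and signals when to output the stored stimulus."""

    def __init__(self, threshold=0.01):
        self.threshold = threshold  # energy above this counts as speech
        self.speaking = False
        self.events = []            # log of detected onset/end transitions

    def process(self, frame):
        is_speech = frame_energy(frame) > self.threshold
        if is_speech and not self.speaking:
            self.events.append("onset")  # onset of speech production detected
        elif not is_speech and self.speaking:
            self.events.append("end")    # end of speech production detected
        self.speaking = is_speech
        return is_speech  # True => output the auditory stimulus this frame

if __name__ == "__main__":
    gate = StimulusGate(threshold=0.01)
    silence = [0.0] * 160
    speech = [0.5] * 160
    for frame in [silence, speech, speech, silence]:
        gate.process(frame)
    print(gate.events)  # -> ['onset', 'end']
```

A real device would replace the synthetic frames with microphone samples and drive a speaker with the stored voice gesture sound whenever `process` returns True.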
Similar technologies:
Publication No. | Publication Date | Patent Title
Cho et al.2001|Articulatory and acoustic studies on domain-initial strengthening in Korean
Liberman1970|The grammars of speech and language
US10540989B2|2020-01-21|Somatic, auditory and cochlear communication system and method
Schroeder1975|Models of hearing
Shinn-Cunningham et al.2008|Selective attention in normal and impaired hearing
Scherer1995|Expression of emotion in voice and music
Arons1992|Techniques, perception, and applications of time-compressed speech
Brokx et al.1982|Intonation and the perceptual separation of simultaneous voices
Hanson et al.2001|Towards models of phonation
Lieberman1968|Primate vocalizations and human linguistic ability
Cummins2009|Rhythm as entrainment: The case of synchronous speech
US8326628B2|2012-12-04|Method of auditory display of sensor data
Stuart et al.2002|Effect of delayed auditory feedback on normal speakers at two speech rates
Stoeger et al.2012|An Asian elephant imitates human speech
Elman1981|Effects of frequency‐shifted feedback on the pitch of vocal productions
Kuhl1979|Speech perception in early infancy: Perceptual constancy for spectrally dissimilar vowel categories
Mersad et al.2012|When Mommy comes to the rescue of statistics: Infants combine top-down and bottom-up cues to segment speech
Jusczyk et al.1988|Viewing the development of speech perception as an innately guided learning process
Kuhl et al.1988|Speech as an intermodal object of perception
Bion et al.2011|Acoustic markers of prominence influence infants’ and adults’ segmentation of speech sequences
Warren1984|Perceptual restoration of obliterated sounds.
Blankenship2002|The timing of nonmodal phonation in vowels
Christophe et al.2003|Prosodic structure and syntactic acquisition: the case of the head‐direction parameter
Gratier et al.2011|Imitation and repetition of prosodic contour in vocal interaction at 3 months.
KR100619215B1|2006-09-06|Microphone and communication interface system
Family patents:
Publication No. | Publication Date
US6754632B1|2004-06-22|
ZA200302135B|2004-06-23|
WO2002024126A1|2002-03-28|
EP1318777A4|2008-05-28|
JP2004524058A|2004-08-12|
AU2001227297B2|2005-07-28|
NZ524747A|2005-05-27|
CN1474675A|2004-02-11|
CA2425066C|2008-08-05|
CA2425066A1|2002-03-28|
AU2729701A|2002-04-02|
EP1318777A1|2003-06-18|
IL154901D0|2003-10-31|
NO20031217D0|2003-03-17|
KR100741397B1|2007-07-20|
NO20031217L|2003-05-19|
MXPA03002333A|2004-12-03|
NO324163B1|2007-09-03|
Cited references:
Publication No. | Filing Date | Publication Date | Applicant | Patent Title
Legal status:
2000-09-18|Priority to US09/665,192
2000-12-18|Application filed by 이스트 캐롤라이나 유니버스티
2000-12-18|Priority to PCT/US2000/034547
2004-02-21|Publication of KR20040016441A
2007-07-20|Application granted
2007-07-20|Publication of KR100741397B1
Priority:
Application No. | Filing Date | Patent Title
US09/665,192|2000-09-18|
US09/665,192|US6754632B1|2000-09-18|2000-09-18|Methods and devices for delivering exogenously generated speech signals to enhance fluency in persons who stutter|
PCT/US2000/034547|WO2002024126A1|2000-09-18|2000-12-18|Methods and devices for delivering exogenously generated speech signals to enhance fluency in persons who stutter|